Added xt::convolve #2507

Merged
JohanMabille merged 1 commit into xtensor-stack:master from spectre-ns:convolve
Apr 15, 2022

Conversation

@spectre-ns
Contributor

@spectre-ns spectre-ns commented Apr 3, 2022

Checklist

  • The title and commit message(s) are descriptive.
  • Small commits made to fix your PR have been squashed to avoid history pollution.
  • Tests have been added for new features or bug fixes.
  • API of new functions and classes are documented.

Description

Added a 1D convolution function with an API similar to NumPy's, and provided a couple of small tests to verify its functionality.
Fixes #1928

@spectre-ns
Contributor Author

I noted where the code could be improved with the new constexpr functionality released in C++17. I think this library strives for C++14-or-newer compatibility, correct?

Member

@JohanMabille JohanMabille left a comment


Thanks for tackling this!

Comment thread include/xtensor/xmath.hpp Outdated
* should v be longer than a.
*/
template <class E1, class E2, class E3>
inline auto convolve(E1&& a, E2&& v, E3&& mode)
Member


E3 being the mode, it can be passed by value.

Contributor Author


Done!

Comment thread include/xtensor/xmath.hpp Outdated
auto temp = a;
a = v;
v = temp;
}
Member


A way to avoid the copy is to have an additional function that performs the implementation and is called by this one:

namespace detail
{
    template <class E1, class E2, class E3>
    inline auto convolve_impl(E1&& e1, E2&& e2, E3 mode)
    {
        // Implementation with no checks
    }
}

template <class E1, class E2, class E3>
inline auto convolve(E1&& a, E2&& v, E3 mode)
{
    // Dimensions and shape checks
    // ...

    bool cond = a.shape(0) < v.shape(0);
    return cond ? detail::convolve_impl(std::forward<E2>(v), std::forward<E1>(a), mode)
                : detail::convolve_impl(std::forward<E1>(a), std::forward<E2>(v), mode);
}

Comment thread include/xtensor/xmath.hpp Outdated
namespace conv_mode
{
struct valid{};
struct full {};
Member

@JohanMabille JohanMabille Apr 5, 2022


Is that intentional to not implement the same mode?

Contributor Author


Yeah, I figured I could push that implementation off to another pull request, because it can be implemented by trimming the full output.

Comment thread include/xtensor/xmath.hpp Outdated
v = temp;
}

//this could benefit from C++17 if constexpr
Member


This can be achieved with tag dispatching (although I'm pretty sure the compiler can optimize the current implementation)

template <class E1, class E2, class E3>
inline auto convolve_impl(E1&& e1, E2&& e2, E3 mode)
{
    XTENSOR_THROW(...);
}

template <class E1, class E2>
inline auto convolve_impl(E1&& e1, E2&& e2, conv_mode::full)
{
    // ...
}

template <class E1, class E2>
inline auto convolve_impl(E1&& e1, E2&& e2, conv_mode::valid)
{
    //....
}

Contributor Author


Done! I didn't implement the generic case, as a compile-time error would be better than a runtime one.

Comment thread include/xtensor/xmath.hpp Outdated
size_t const nv = v.shape(0);
size_t const n = na + nv - 1;
xt::xarray<value_type> out = xt::zeros<value_type>({ n });
for (size_t i = 0; i < n; ++i) {
Member


nitpicking: an opening curly brace goes on a new line

Contributor Author


Done!

Member

@tdegeus tdegeus left a comment


Awesome!

Comment thread include/xtensor/xmath.hpp Outdated
/*
* convolution mode placeholders
*/
namespace conv_mode
Member


I would prefer the name convolve_mode I think.

Comment thread include/xtensor/xmath.hpp Outdated
Comment on lines +3029 to +3030
* @param a 1D expression
* @param v 1D convolutional kernel
Member


Very minor: This is a bit different than NumPy's documentation. In principle that is ok, it's just that one could argue that either of the arguments is the kernel of the other. So naming one 'kernel' might confuse?

Contributor Author


This is a non-issue now, because removing the copy makes it symmetrical.

Comment thread include/xtensor/xmath.hpp Outdated
*
* @param a 1D expression
* @param v 1D convolutional kernel
* @param mode placeholder from conv_mode namespace
Member


Suggested change
* @param mode placeholder from conv_mode namespace
* @param mode Select algorithm from #conv_mode

Comment thread include/xtensor/xmath.hpp Outdated
}

/*
* convolution mode placeholders
Member


A brief description is in order, I think. Also, you could point out that these modes are the same as NumPy's.

Comment thread include/xtensor/xmath.hpp Outdated
{
using value_type = typename std::decay<E1>::type::value_type;

if (a.shape().size() != 1 || v.shape().size() != 1)
Member


Suggested change
if (a.shape().size() != 1 || v.shape().size() != 1)
if (a.dimension() != 1 || v.dimension() != 1)

Comment thread include/xtensor/xmath.hpp Outdated
size_t const jmn = (i >= nv - 1) ? i - (nv - 1) : 0;
size_t const jmx = (i < na - 1) ? i : na - 1;
for (size_t j = jmn; j <= jmx; ++j) {
out[i] += (a(j) * v(i - j));
Member


Suggested change
out[i] += (a(j) * v(i - j));
out(i) += a(j) * v(i - j);

(I don't think the parentheses are needed there.)

Comment thread include/xtensor/xmath.hpp Outdated
size_t const na = a.shape(0);
size_t const nv = v.shape(0);
int const n = std::max(na, nv) - std::min(na, nv) + 1;
xt::xarray<value_type> out = xt::zeros<value_type>({ n });
Member


See above

Comment thread include/xtensor/xmath.hpp Outdated
xt::xarray<value_type> out = xt::zeros<value_type>({ n });
for (size_t i = 0; i < n; ++i) {
for (int j(v.shape(0) - 1), k(i); j >= 0; --j) {
out[i] += a(j) * v(k);
Member


Suggested change
out[i] += a(j) * v(k);
out(i) += a(j) * v(k);

Comment thread include/xtensor/xmath.hpp Outdated
int const n = std::max(na, nv) - std::min(na, nv) + 1;
xt::xarray<value_type> out = xt::zeros<value_type>({ n });
for (size_t i = 0; i < n; ++i) {
for (int j(v.shape(0) - 1), k(i); j >= 0; --j) {
Member


Suggested change
for (int j(v.shape(0) - 1), k(i); j >= 0; --j) {
for (int j = v.shape(0) - 1, k = i; j >= 0; --j) {

no?
Also I think that the int can be avoided, e.g. https://stackoverflow.com/a/23244694/2646505

But most importantly, why is it even relevant to loop backward?

Comment thread test/test_xmath.cpp Outdated

TEST(xmath, convolve_full)
{
xt::xarray<double> x = { 1.0, 1.0, 1.0 };
Member


It'd be good to add another test that does not only involve unity inputs.

Member

@tdegeus tdegeus left a comment


Thanks! Did not have time to check the conditions in detail

Comment thread include/xtensor/xmath.hpp Outdated
@spectre-ns
Contributor Author

@tdegeus I accepted all your suggestions! Thanks!

@spectre-ns spectre-ns requested a review from tdegeus April 12, 2022 22:29
Member

@tdegeus tdegeus left a comment


Nitpicking. Other than that things look good to me, thanks! A side note: I'm surprised that NumPy does not support a periodic version. True, one could (and probably should) use Fourier for that. But still.

Comment thread include/xtensor/xmath.hpp Outdated
@spectre-ns
Contributor Author

Nitpicking. Other than that things look good to me, thanks! A side note: I'm surprised that NumPy does not support a periodic version. True, one could (and probably should) use Fourier for that. But still.

I think a periodic convolution in the time domain would simply be a wrapped convolution? But yeah, Fourier does the wrapping implicitly, so it's best to do that. SciPy probably offers that functionality.

@tdegeus
Member

tdegeus commented Apr 13, 2022

Final question: could you squash your commits? We prefer not to have GitHub do that, so that we have an easier way to generate a changelog (if I remember well, GitHub can do it nowadays; still, in the meantime...). Thanks!

@spectre-ns
Contributor Author

Final question: could you squash your commits? We prefer not to have GitHub do that, so that we have an easier way to generate a changelog (if I remember well, GitHub can do it nowadays; still, in the meantime...). Thanks!

I definitely will do that. When I submitted there were only two commits. I appreciate the thorough review!

@spectre-ns spectre-ns closed this Apr 13, 2022
@spectre-ns
Contributor Author

spectre-ns commented Apr 13, 2022

This is embarrassing... I messed up; hopefully I didn't break the PR...

@spectre-ns spectre-ns reopened this Apr 13, 2022
@tdegeus
Member

tdegeus commented Apr 13, 2022

Still looks ok, the merge commit was not necessary I guess.
It's just empty so squashing it again should get rid of it.

@spectre-ns
Contributor Author

Still looks ok, the merge commit was not necessary I guess. It's just empty so squashing it again should get rid of it.

Fixed it

@spectre-ns spectre-ns closed this Apr 13, 2022
@spectre-ns spectre-ns reopened this Apr 13, 2022
@spectre-ns spectre-ns closed this Apr 13, 2022
@spectre-ns spectre-ns reopened this Apr 13, 2022
@JohanMabille
Member

Awesome! Thanks a lot!

@JohanMabille JohanMabille merged commit fd7a610 into xtensor-stack:master Apr 15, 2022

Development

Successfully merging this pull request may close these issues.

Request to implement Convolve and Correlate equivalent to NumPy versions.

3 participants